Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App


In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.


Why We're Here

In this notebook, you will take the first steps toward developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that the person most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

Sample Dog Output

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!

The Road Ahead

We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Write your Algorithm
  • Step 6: Test Your Algorithm

Step 0: Import Datasets

Make sure that you've downloaded the required human and dog datasets:

  • Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.

  • Download the human dataset. Unzip the folder and place it in the home directory, at the location /lfw.

Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.

In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.

In [1]:
# Install cv2 library
!pip install -q opencv-python

# Download OpenCV's implementations of Haar feature-based cascade classifiers to detect human faces in images
haarcascades_baseurl = 'https://raw.githubusercontent.com/opencv/opencv/master/data/'
haarcascades_types = ['haarcascades/haarcascade_frontalface_alt.xml',
                      'haarcascades/haarcascade_frontalface_alt2.xml',
                      'haarcascades/haarcascade_frontalface_alt_tree.xml',
                      'haarcascades/haarcascade_frontalface_default.xml']

for haarcascades_type in haarcascades_types:
    !wget -q '{haarcascades_baseurl}{haarcascades_type}' -P haarcascades
In [2]:
# Download the dog images
!wget -q https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip
!unzip -q dogImages.zip
In [3]:
# Download the human images
!wget -q https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip
!unzip -q lfw.zip lfw/*/*
In [4]:
# Import all dependencies for this notebook
%matplotlib inline

import numpy as np
from glob import glob
import cv2
from tqdm import tqdm

import matplotlib.pyplot as plt

from PIL import Image, ImageFile

import torch
from torch import nn, optim
from torchvision import models, transforms, datasets

import sys
import os

from google.colab import drive
In [5]:
gdrive_dir = '/gdrive'
drive.mount(gdrive_dir)
gdrive_dir = gdrive_dir+'/My Drive/Colab Notebooks/'
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code

Enter your authorization code:
··········
Mounted at /gdrive
In [6]:
# load filenames for human and dog images
human_files = np.array(glob("lfw/*/*"))
dog_files = np.array(glob("dogImages/*/*/*"))

# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
There are 13233 total human images.
There are 8351 total dog images.

Step 1: Detect Humans

In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [7]:
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier(haarcascades_types[0])

# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 1

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specify the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
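
For instance, the bounding box entries can be used to crop a detected face out of the image. A minimal sketch, reusing img and faces from the cell above:

# crop the first detected face region (x, y, w, h) out of the color image
if len(faces) > 0:
    x, y, w, h = faces[0]
    face_crop = img[y:y+h, x:x+w]  # rows are indexed by y, columns by x
    plt.imshow(cv2.cvtColor(face_crop, cv2.COLOR_BGR2RGB))
    plt.show()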

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [8]:
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

(IMPLEMENTATION) Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

Answer:
Human faces detected in human images: 100.0%
Human faces detected in dog images: 12.0%

In [9]:
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]

#-#-# Do NOT modify the code above this line. #-#-#

## DONE: Test the performance of the face_detector algorithm 
## on the images in human_files_short and dog_files_short.

detections_in_human_files = np.mean([face_detector(img) for img in human_files_short]) * 100
print(f'Human faces detected in human images\t{detections_in_human_files:.2f}%')

detections_in_dog_files = np.mean([face_detector(img) for img in dog_files_short]) * 100
print(f'Human faces detected in dog images\t{detections_in_dog_files:.2f}%')
Human faces detected in human images	100.00%
Human faces detected in dog images	12.00%

We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.

In [10]:
### (Optional) 
### DONE: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
for haarcascades_type in haarcascades_types:
    print('----- Testing', haarcascades_type)
    
    face_cascade = cv2.CascadeClassifier(haarcascades_type)

    detections_in_human_files = np.mean([face_detector(img) for img in human_files_short]) * 100
    print(f'Human faces detected in human images\t{detections_in_human_files:.2f}%')

    detections_in_dog_files = np.mean([face_detector(img) for img in dog_files_short]) * 100
    print(f'Human faces detected in dog images\t{detections_in_dog_files:.2f}%')
    print()

# Load the "haarcascade_frontalface_alt.xml"
# since it performs better than the other tested classifiers.
face_cascade = cv2.CascadeClassifier(haarcascades_types[0])
----- Testing haarcascades/haarcascade_frontalface_alt.xml
Human faces detected in human images	100.00%
Human faces detected in dog images	12.00%

----- Testing haarcascades/haarcascade_frontalface_alt2.xml
Human faces detected in human images	99.00%
Human faces detected in dog images	22.00%

----- Testing haarcascades/haarcascade_frontalface_alt_tree.xml
Human faces detected in human images	59.00%
Human faces detected in dog images	2.00%

----- Testing haarcascades/haarcascade_frontalface_default.xml
Human faces detected in human images	100.00%
Human faces detected in dog images	41.00%
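
Beyond Haar cascades, a deep-learning-based face detector could be evaluated the same way. The sketch below is illustrative and not run in this notebook; it assumes the third-party facenet-pytorch package (pip install facenet-pytorch), which provides a pre-trained MTCNN detector:

# Sketch: MTCNN face detector (assumes facenet-pytorch is installed)
from facenet_pytorch import MTCNN

mtcnn = MTCNN(keep_all=True, device='cuda' if torch.cuda.is_available() else 'cpu')

def face_detector_mtcnn(img_path):
    # MTCNN expects an RGB PIL image; detect() returns (boxes, probabilities)
    boxes, _ = mtcnn.detect(Image.open(img_path).convert('RGB'))
    return boxes is not None

detections = np.mean([face_detector_mtcnn(img) for img in human_files_short]) * 100
print(f'Human faces detected in human images\t{detections:.2f}%')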


Step 2: Detect Dogs

In this section, we use a pre-trained model to detect dogs in images.

Obtain Pre-trained VGG-16 Model

The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.

In [11]:
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)

# set model to evaluation mode
VGG16.eval()

# do not track gradients for the model parameters
for param in VGG16.parameters():
    param.requires_grad = False

# check if CUDA is available
use_cuda = torch.cuda.is_available()

# move model to GPU if CUDA is available
if use_cuda:
    VGG16 = VGG16.cuda()
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /root/.torch/models/vgg16-397923af.pth
553433881it [00:06, 87967636.71it/s]

Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.

(IMPLEMENTATION) Making Predictions with a Pre-trained Model

In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.

Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.

In [12]:
def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    
    ## DONE: Complete the function.
    ## Load and pre-process an image from the given img_path
    compose = transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])
    
    image = Image.open(img_path).convert('RGB')  # ensure a 3-channel RGB image

    image = compose(image)
    image = image[None,:]  # add a batch dimension: (3,224,224) -> (1,3,224,224)
    
    if use_cuda:
        image = image.cuda()
        
    ## Return the *index* of the predicted class for that image
    prediction = VGG16(image)
    
    # predicted class index
    _, top_class = prediction.topk(1)
    return top_class.item()
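
A quick spot check, assuming the cell above has been run: the printed index for a known dog image should fall in the dog range discussed in the next section.

# sanity check: the ImageNet index predicted for a dog image
print(VGG16_predict(dog_files[0]))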

(IMPLEMENTATION) Write a Dog Detector

While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence, at dictionary keys 151-268 inclusive, spanning all categories from 'Chihuahua' to 'Mexican hairless'. Thus, to check whether an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check whether the model predicts an index between 151 and 268 (inclusive).

Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).

In [13]:
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    ## DONE: Complete the function.
    
    prediction = VGG16_predict(img_path)
    
    # True if the predicted ImageNet index falls in the dog range
    return 151 <= prediction <= 268

(IMPLEMENTATION) Assess the Dog Detector

Question 2: Use the code cell below to test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?
  • What percentage of the images in dog_files_short have a detected dog?

Answer:
Dogs detected in human images 0.0%
Dogs detected in dog images 100.0%

In [14]:
### DONE: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
detections_in_human_files = np.mean([dog_detector(img) for img in human_files_short]) * 100
print(f'Dogs detected in human images\t{detections_in_human_files:.2f}%')

detections_in_dog_files = np.mean([dog_detector(img) for img in dog_files_short]) * 100
print(f'Dogs detected in dog images\t{detections_in_dog_files:.2f}%')
Dogs detected in human images	0.00%
Dogs detected in dog images	100.00%

We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.

In [15]:
### (Optional) 
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
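
One possible approach, sketched here but not run: reuse the same pre-processing with a pre-trained ResNet-50, which uses the same ImageNet class indices, so the 151-268 dog range still applies.

# Sketch: a ResNet-50 based dog detector (same ImageNet indices as VGG-16)
resnet50 = models.resnet50(pretrained=True)
resnet50.eval()
if use_cuda:
    resnet50 = resnet50.cuda()

resnet_transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

def dog_detector_resnet50(img_path):
    image = resnet_transform(Image.open(img_path).convert('RGB'))[None, :]
    if use_cuda:
        image = image.cuda()
    with torch.no_grad():
        pred = resnet50(image).argmax().item()
    return 151 <= pred <= 268

detections = np.mean([dog_detector_resnet50(img) for img in dog_files_short]) * 100
print(f'Dogs detected in dog images\t{detections:.2f}%')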

Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.

We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany Welsh Springer Spaniel

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever American Water Spaniel

Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador Chocolate Labrador Black Labrador

We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.

Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!

In [16]:
### DONE: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes

# prevent PIL from raising an exception when it finds a truncated image
ImageFile.LOAD_TRUNCATED_IMAGES = True

# folder where the images are
data_dir = 'dogImages/'

# how many subprocesses to use for data loading
num_workers = 0 # data will be loaded in the main thread

# mean and std arrays for normalization
normalize_mean = np.array([0.485, 0.456, 0.406])
normalize_std = np.array([0.229, 0.224, 0.225])

## Specify transforms
data_transforms = {}

# transforms for the training set
data_transforms['train'] = transforms.Compose([
    transforms.Resize(480),
    transforms.RandomRotation(30),
    transforms.RandomResizedCrop(224, scale=(224./480, 224./256), ratio=(1.,1.)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
    transforms.Normalize(
        normalize_mean,
        normalize_std)
    ])

# transforms for the validation and test sets
data_transforms['valid_test'] = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(
        normalize_mean,
        normalize_std)
    ])

## Load the datasets with ImageFolder
image_datasets = {}
image_datasets['train_data'] = datasets.ImageFolder(data_dir + '/train', transform=data_transforms['train'])
image_datasets['valid_data'] = datasets.ImageFolder(data_dir + '/valid', transform=data_transforms['valid_test'])
image_datasets['test_data'] = datasets.ImageFolder(data_dir + '/test', transform=data_transforms['valid_test'])

## Using the image datasets and the transforms, define the dataloaders
dataloaders = {}
dataloaders['train_data'] = torch.utils.data.DataLoader(image_datasets['train_data'], batch_size=96,  shuffle=True, num_workers=num_workers)
dataloaders['valid_data'] = torch.utils.data.DataLoader(image_datasets['valid_data'], batch_size=167, shuffle=False, num_workers=num_workers)
dataloaders['test_data']  = torch.utils.data.DataLoader(image_datasets['test_data'],  batch_size=64,  shuffle=False, num_workers=num_workers)

print(f"Train data: {len(dataloaders['train_data'].dataset)} images / {len(dataloaders['train_data'])} batches")
print(f"Valid data: {len(dataloaders['valid_data'].dataset)} images / {len(dataloaders['valid_data'])} batches")
print(f"Test  data: {len(dataloaders['test_data'].dataset) } images / {len(dataloaders['test_data']) } batches")
Train data: 6680 images / 70 batches
Valid data: 835 images / 5 batches
Test  data: 836 images / 14 batches
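
A quick sanity check of the loaders (assuming the cell above has been run), confirming the tensor shapes of one training batch:

# fetch one batch: images are (batch_size, channels, height, width)
images, labels = next(iter(dataloaders['train_data']))
print(images.shape, labels.shape)  # expected: torch.Size([96, 3, 224, 224]) torch.Size([96])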

Question 3: Describe your chosen procedure for preprocessing the data.

  • How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
  • Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?

Answer:

Image size
I chose an input image of 224x224 pixels, which allows five stride-2 downsizings (224 / 2^5 = 7), resulting in 7x7 tensors at the end of the convolutional layers. The image is not too large, which keeps training fast, yet not too small, so the relevant information is preserved.

Resizing the image
I perform the following steps to obtain the 224x224 pixel input image:

For the training data set:

  • Resize the image so that its smaller side is 480 px, keeping the original aspect ratio.
  • Make a random 224x224 px crop, with a scale in the range [224/480, 224/256] and a fixed aspect ratio. Cropping this way is equivalent to resizing the smaller side of the input image to a value in the range [256, 480] px before taking the 224x224 px crop.

For the validation and test data sets:

  • Resize the image so that its smaller edge is 224 px, keeping the original aspect ratio.
  • Take a 224x224 px center crop, which preserves the original aspect ratio.

Data augmentation
I based my data augmentation on the scheme described in the ResNet paper and also added rotations. I use data augmentation only for the training data set (a visualization sketch follows this list).

  • Translations: a random 224x224 px crop of the resized image, whose shorter side is effectively sampled in the range [256, 480] px.
  • Flips: a horizontal flip is applied with 50% probability.
  • Rotations: the image is rotated by a random angle in the range [-30, +30] degrees.
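
A minimal visualization sketch, using the transforms and normalization arrays defined above, that plots a few augmented variants of a single training image:

# apply the training transform several times to one image and plot the variants
sample_img = Image.open(dog_files[0]).convert('RGB')
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax in axes:
    augmented = data_transforms['train'](sample_img)        # random crop/flip/rotation
    augmented = augmented.numpy().transpose(1, 2, 0)        # CxHxW -> HxWxC
    augmented = augmented * normalize_std + normalize_mean  # undo the normalization
    ax.imshow(np.clip(augmented, 0, 1))
    ax.axis('off')
plt.show()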

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. Use the template in the code cell below.

In [17]:
# define the CNN architecture
class Net(nn.Module):
    ### DONE: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        ## Define layers of a CNN
        
        ## Convolutional layers
        # in: 224x224    out: 112x112
        self.conv1    = nn.Conv2d(    3,  64, 7, padding=3, stride=2)
        self.conv1_bn = nn.BatchNorm2d(   64)
        
        # in: 112x112    out: 56x56
        self.conv2_1    = nn.Conv2d( 64,  64, 3, padding=1)
        self.conv2_1_bn = nn.BatchNorm2d( 64)
        self.conv2_2    = nn.Conv2d( 64,  64, 3, padding=1)
        self.conv2_2_bn = nn.BatchNorm2d( 64)
        
        # in: 56x56    out: 29x29 (the 1x1 conv below pads by 1, growing each side by 2)
        self.conv3_in   = nn.Conv2d( 64, 128, 1, padding=1)
        self.conv3_1    = nn.Conv2d(128, 128, 3, padding=1)
        self.conv3_1_bn = nn.BatchNorm2d(128)
        self.conv3_2    = nn.Conv2d(128, 128, 3, padding=1)
        self.conv3_2_bn = nn.BatchNorm2d(128)
        
        # in: 29x29    out: 16x16
        self.conv4_in   = nn.Conv2d(128, 256, 1, padding=1)
        self.conv4_1    = nn.Conv2d(256, 256, 3, padding=1)
        self.conv4_1_bn = nn.BatchNorm2d(256)
        self.conv4_2    = nn.Conv2d(256, 256, 3, padding=1)
        self.conv4_2_bn = nn.BatchNorm2d(256)
        
        # in: 16x16    out: 9x9
        self.conv5_in   = nn.Conv2d(256, 512, 1, padding=1)
        self.conv5_1    = nn.Conv2d(512, 512, 3, padding=1)
        self.conv5_1_bn = nn.BatchNorm2d(512)
        self.conv5_2    = nn.Conv2d(512, 512, 3, padding=1)
        self.conv5_2_bn = nn.BatchNorm2d(512)
        
        # Downsizing
        self.maxpool = nn.MaxPool2d(          3, padding=1, stride=2)
        
        # in: 9x9    out: 1x1 (a single 7x7 window; the 2 px border is discarded)
        self.avgpool = nn.AvgPool2d(7)
        
        # Activation
        self.activation = nn.ReLU()
        
        # Fully connected layers
        self.fc = nn.Linear(512,133)
                                
        
    def forward(self, x):
        ## Define forward behavior
        
        ## Stack 1
        x = self.conv1(x)
        x = self.conv1_bn(x)
        x = self.activation(x)
        
        ## Stack 2
        shortcut = x
        
        x = self.conv2_1(x)
        x = self.conv2_1_bn(x)
        x = self.activation(x)
        
        x = self.conv2_2(x)
        x = self.conv2_2_bn(x)
        x = self.activation(x)
        
        x += shortcut
            
        x = self.maxpool(x)
        
        ## Stack 3
        x = self.conv3_in(x)
        x = self.activation(x)
        
        shortcut = x
        
        x = self.conv3_1(x)
        x = self.conv3_1_bn(x)
        x = self.activation(x)
        
        x = self.conv3_2(x)
        x = self.conv3_2_bn(x)
        x = self.activation(x)
        
        x += shortcut
        
        x = self.maxpool(x)
        
        ## Stack 4
        x = self.conv4_in(x)
        x = self.activation(x)
        
        shortcut = x
        
        x = self.conv4_1(x)
        x = self.conv4_1_bn(x)
        x = self.activation(x)
        
        x = self.conv4_2(x)
        x = self.conv4_2_bn(x)
        x = self.activation(x)
        
        x += shortcut
        
        x = self.maxpool(x)
        
        ## Stack 5
        x = self.conv5_in(x)
        x = self.activation(x)
        
        shortcut = x
        
        x = self.conv5_1(x)
        x = self.conv5_1_bn(x)
        x = self.activation(x)
        
        x = self.conv5_2(x)
        x = self.conv5_2_bn(x)
        x = self.activation(x)
        
        x += shortcut
            
        x = self.maxpool(x)
        
        x = self.avgpool(x)
        
        # Classifier: flatten (N, 512, 1, 1) -> (N, 512), then fully connected
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        
        return x

#-#-# You do NOT have to modify the code below this line. #-#-#

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
use_cuda = torch.cuda.is_available()
if use_cuda:
    model_scratch.cuda()

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.

Answer:

I developed my final CNN architecture inspired by the ResNet paper.

At first, I started with a plain VGG-like model, with 5 convolutional layers and 3x3 filters, increasing the depth at each layer: 8, 16, 32, 64, and 128 filters.
That model reached 18% accuracy (156 of 836 test images).

Then, I decided to add more layers, still following the VGG design, intending to get better performance. I doubled the number of convolutional layers, kept the 3x3 filters, and increased the depth to 32, 64, 128, 256, and 512 filters.
That model became heavy, and the training phase took a long time to converge.

With my VGG-like models, I faced a dilemma: the shallower network did not perform well enough, and the deeper network converged too slowly.
Thus, I went back to my research to build a better architecture.

In the ResNet paper, I found a way to speed up convergence while increasing depth: deep residual learning. The paper describes how shortcut connections allow the network to backpropagate the error to the earlier layers, avoiding the vanishing gradient problem.

So, after studying the ResNet paper, I started modeling my final CNN architecture, as described below.

I converted my 5 convolutional layers into 5 convolutional stacks.

The first convolutional stack (named 'Conv1') applies 64 filters, each 7x7, with a stride of 2, which halves the input size.

The second convolutional stack (named 'Conv2') keeps the number of filters at 64, but has two convolutional layers and halves the input size with a max pooling layer of stride 2.

Here, the deep residual learning concept is introduced by adding a shortcut connection between the stack input and the stack output.
After the last activation function is applied over the last convolutional layer, the shortcut is added to the stack output. Thus, the output of each stack can be written as y = f(x) + x.
Shortcuts sped up the training of my network, allowing a deeper architecture.

The next 3 stacks (named Conv3, Conv4, Conv5) follow the same principle as Conv2, but each stack doubles the number of filters at its entry, using a 1x1 convolution.

The last convolutional stack (Conv5) produces 512 feature maps. The output is given by a 7x7 average pool, generating one value per map, i.e., a vector with 512 entries. The classifier takes that vector and generates the dog breed predictions.

After some tests with that architecture, I reasoned that the shortcuts had mitigated the vanishing gradient problem, but the exploding gradient problem had taken its place.
To solve that, I introduced batch normalization after each convolutional layer. The network could then converge to a better result.

My final CNN architecture reached 74% accuracy (623 of 836 test images) after 120 epochs, training for about 11.5 hours.

Although this is a good result for a case-study CNN, the architecture could likely be improved further by increasing the stack depth (adding more convolutional layers) or the number of filters within each stack. For deeper networks, the ResNet paper proposes one more refinement: "bottleneck" blocks, which could be added to this architecture to boost its performance.
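
For reference, a minimal sketch of such a "bottleneck" block; the class name and channel arguments are illustrative, not part of the model above. A 1x1 convolution reduces the channels, a 3x3 convolution processes the reduced maps, and a second 1x1 convolution restores the channels before the shortcut is added:

# Sketch of a ResNet-style bottleneck block (illustrative names and sizes)
class Bottleneck(nn.Module):
    def __init__(self, channels, reduced):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(channels, reduced, 1)            # 1x1: reduce channels
        self.bn1   = nn.BatchNorm2d(reduced)
        self.conv2 = nn.Conv2d(reduced, reduced, 3, padding=1)  # 3x3: process
        self.bn2   = nn.BatchNorm2d(reduced)
        self.conv3 = nn.Conv2d(reduced, channels, 1)            # 1x1: restore channels
        self.bn3   = nn.BatchNorm2d(channels)
        self.relu  = nn.ReLU()

    def forward(self, x):
        shortcut = x
        x = self.relu(self.bn1(self.conv1(x)))
        x = self.relu(self.bn2(self.conv2(x)))
        x = self.bn3(self.conv3(x))
        return self.relu(x + shortcut)  # residual connection: y = f(x) + x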

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.

In [18]:
### DONE: select loss function
criterion_scratch = nn.CrossEntropyLoss()

### DONE: select optimizer
optimizer_scratch = optim.Adagrad(model_scratch.parameters(), lr=0.01, weight_decay=0.001)

scheduler_scratch = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer_scratch, mode='min', factor=0.1, patience=14, verbose=True,
    threshold=0.01, threshold_mode='abs', cooldown=0, min_lr=0, eps=1e-08)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.

In [19]:
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda,
          save_path, lr_scheduler=None):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf 
    
    # define how many epochs to train before running the validation loop
    validate_for_each = 1
    validate_after = 0
    
    train_loss_trace = []
    valid_loss_trace = []
    for epoch in range(1, n_epochs+1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0
        
        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(dataloaders['train_data']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
                
            ## find the loss and update the model parameters accordingly
            # clear optimizer
            optimizer.zero_grad()
            
            # pass train batch through model feed-forward
            output = model(data)
            
            # calculate the loss for this training batch
            loss = criterion(output, target)
            
            # do the backpropagation
            loss.backward()
            
            # optimize weights
            optimizer.step()
            
            ## record the average training loss
            train_loss += (1 / (batch_idx + 1)) * (loss.item() - train_loss)
            
            sys.stdout.write(f'\rEpoch: {epoch} '+
                             f'\tTraining Loss: {train_loss:.6f} '+
                             f'\tbatches: {batch_idx+1}')
            
        train_loss_trace.append(train_loss)
            
        ######################    
        # validate the model #
        ######################
        if epoch > validate_after and epoch % validate_for_each == 0:
            model.eval()
            with torch.no_grad():
                for batch_idx, (data, target) in enumerate(dataloaders['valid_data']):
                    # move to GPU
                    if use_cuda:
                        data, target = data.cuda(), target.cuda()
                        
                    # get predictions for this validation batch
                    output = model(data)
                    
                    ## update the average validation loss
                    # calculate loss for this validation batch
                    loss = criterion(output, target)
                    # Track validation loss
                    valid_loss += (1 / (batch_idx + 1)) * (loss.item() - valid_loss)
                    
                    sys.stdout.write(f'\rEpoch: {epoch} '+
                                     f'\tTraining Loss: {train_loss:.6f} '+
                                     f'\tValidation Loss: {valid_loss:.6f} '+
                                     f'\tbatches: {batch_idx+1}')
                    
                valid_loss_trace.append(valid_loss)
                
                # print training/validation statistics 
                print(f'\rEpoch: {epoch} '+
                      f'\tTraining Loss: {train_loss:.6f} '+
                      f'\tValidation Loss: {valid_loss:.6f}')
                
                if lr_scheduler:
                    lr_scheduler.step(valid_loss)
                    
            ## DONE: save the model if validation loss has decreased
            if valid_loss < valid_loss_min:
                print(f'\t\t\tValid loss decreasing: '+
                      f'{valid_loss_min:.6f} --> {valid_loss:.6f} ... Saving model')
                
                checkpoint = {'model_state_dict': model.state_dict(),
                              'optimizer_state_dict': optimizer.state_dict(),
                              'train_loss_trace': train_loss_trace,
                              'valid_loss_trace': valid_loss_trace,
                              'epochs': epoch}
                torch.save(checkpoint, save_path)
                
                valid_loss_min = valid_loss
        else:
            valid_loss_trace.append(0)
            print(f'\rEpoch: {epoch} '+
                  f'\tTraining Loss: {train_loss:.6f}')
            
    # return trained model
    return model
In [20]:
# train the model
checkpoint_path = gdrive_dir+'model_scratch.pt'

model_scratch = train(125, dataloaders, model_scratch, optimizer_scratch, 
                      criterion_scratch, use_cuda, checkpoint_path, scheduler_scratch)
Epoch: 1 	Training Loss: 4.951664 	Validation Loss: 4.868338
			Valid loss decreasing: inf --> 4.868338 ... Saving model
Epoch: 2 	Training Loss: 4.741908 	Validation Loss: 4.779368
			Valid loss decreasing: 4.868338 --> 4.779368 ... Saving model
Epoch: 3 	Training Loss: 4.529120 	Validation Loss: 4.597272
			Valid loss decreasing: 4.779368 --> 4.597272 ... Saving model
Epoch: 4 	Training Loss: 4.291046 	Validation Loss: 4.522438
			Valid loss decreasing: 4.597272 --> 4.522438 ... Saving model
Epoch: 5 	Training Loss: 4.139787 	Validation Loss: 4.264545
			Valid loss decreasing: 4.522438 --> 4.264545 ... Saving model
Epoch: 6 	Training Loss: 4.037728 	Validation Loss: 4.078906
			Valid loss decreasing: 4.264545 --> 4.078906 ... Saving model
Epoch: 7 	Training Loss: 3.939118 	Validation Loss: 4.127297
Epoch: 8 	Training Loss: 3.843736 	Validation Loss: 4.387480
Epoch: 9 	Training Loss: 3.716790 	Validation Loss: 4.237972
Epoch: 10 	Training Loss: 3.606373 	Validation Loss: 3.754896
			Valid loss decreasing: 4.078906 --> 3.754896 ... Saving model
Epoch: 11 	Training Loss: 3.510773 	Validation Loss: 3.995260
Epoch: 12 	Training Loss: 3.417307 	Validation Loss: 3.897765
Epoch: 13 	Training Loss: 3.320968 	Validation Loss: 3.884655
Epoch: 14 	Training Loss: 3.202556 	Validation Loss: 4.540680
Epoch: 15 	Training Loss: 3.100250 	Validation Loss: 3.669015
			Valid loss decreasing: 3.754896 --> 3.669015 ... Saving model
Epoch: 16 	Training Loss: 3.003213 	Validation Loss: 3.724133
Epoch: 17 	Training Loss: 2.923750 	Validation Loss: 3.731365
Epoch: 18 	Training Loss: 2.841213 	Validation Loss: 3.762576
Epoch: 19 	Training Loss: 2.752066 	Validation Loss: 3.456081
			Valid loss decreasing: 3.669015 --> 3.456081 ... Saving model
Epoch: 20 	Training Loss: 2.664950 	Validation Loss: 3.137635
			Valid loss decreasing: 3.456081 --> 3.137635 ... Saving model
Epoch: 21 	Training Loss: 2.593346 	Validation Loss: 3.798050
Epoch: 22 	Training Loss: 2.520856 	Validation Loss: 2.898354
			Valid loss decreasing: 3.137635 --> 2.898354 ... Saving model
Epoch: 23 	Training Loss: 2.440272 	Validation Loss: 2.970893
Epoch: 24 	Training Loss: 2.365984 	Validation Loss: 3.137921
Epoch: 25 	Training Loss: 2.268828 	Validation Loss: 2.839536
			Valid loss decreasing: 2.898354 --> 2.839536 ... Saving model
Epoch: 26 	Training Loss: 2.219481 	Validation Loss: 2.684779
			Valid loss decreasing: 2.839536 --> 2.684779 ... Saving model
Epoch: 27 	Training Loss: 2.143125 	Validation Loss: 2.479659
			Valid loss decreasing: 2.684779 --> 2.479659 ... Saving model
Epoch: 28 	Training Loss: 2.079359 	Validation Loss: 2.597667
Epoch: 29 	Training Loss: 2.006464 	Validation Loss: 2.781580
Epoch: 30 	Training Loss: 1.922974 	Validation Loss: 2.747800
Epoch: 31 	Training Loss: 1.859585 	Validation Loss: 2.440710
			Valid loss decreasing: 2.479659 --> 2.440710 ... Saving model
Epoch: 32 	Training Loss: 1.794825 	Validation Loss: 2.192362
			Valid loss decreasing: 2.440710 --> 2.192362 ... Saving model
Epoch: 33 	Training Loss: 1.725007 	Validation Loss: 2.628619
Epoch: 34 	Training Loss: 1.686243 	Validation Loss: 3.299397
Epoch: 35 	Training Loss: 1.620217 	Validation Loss: 2.079341
			Valid loss decreasing: 2.192362 --> 2.079341 ... Saving model
Epoch: 36 	Training Loss: 1.581751 	Validation Loss: 2.308867
Epoch: 37 	Training Loss: 1.542086 	Validation Loss: 2.143754
Epoch: 38 	Training Loss: 1.475350 	Validation Loss: 1.989022
			Valid loss decreasing: 2.079341 --> 1.989022 ... Saving model
Epoch: 39 	Training Loss: 1.457491 	Validation Loss: 2.582336
Epoch: 40 	Training Loss: 1.403866 	Validation Loss: 1.920184
			Valid loss decreasing: 1.989022 --> 1.920184 ... Saving model
Epoch: 41 	Training Loss: 1.369990 	Validation Loss: 2.033796
Epoch: 42 	Training Loss: 1.309092 	Validation Loss: 2.068765
Epoch: 43 	Training Loss: 1.265643 	Validation Loss: 1.783238
			Valid loss decreasing: 1.920184 --> 1.783238 ... Saving model
Epoch: 44 	Training Loss: 1.221233 	Validation Loss: 1.858581
Epoch: 45 	Training Loss: 1.211915 	Validation Loss: 2.017980
Epoch: 46 	Training Loss: 1.144791 	Validation Loss: 1.660459
			Valid loss decreasing: 1.783238 --> 1.660459 ... Saving model
Epoch: 47 	Training Loss: 1.144488 	Validation Loss: 1.707618
Epoch: 48 	Training Loss: 1.105431 	Validation Loss: 1.842678
Epoch: 49 	Training Loss: 1.075615 	Validation Loss: 1.912090
Epoch: 50 	Training Loss: 1.059419 	Validation Loss: 1.724267
Epoch: 51 	Training Loss: 1.022051 	Validation Loss: 1.932049
Epoch: 52 	Training Loss: 0.983942 	Validation Loss: 2.143417
Epoch: 53 	Training Loss: 0.962397 	Validation Loss: 1.534588
			Valid loss decreasing: 1.660459 --> 1.534588 ... Saving model
Epoch: 54 	Training Loss: 0.940642 	Validation Loss: 1.928507
Epoch: 55 	Training Loss: 0.923978 	Validation Loss: 1.709323
Epoch: 56 	Training Loss: 0.899713 	Validation Loss: 1.392409
			Valid loss decreasing: 1.534588 --> 1.392409 ... Saving model
Epoch: 57 	Training Loss: 0.872927 	Validation Loss: 1.608978
Epoch: 58 	Training Loss: 0.866757 	Validation Loss: 1.588305
Epoch: 59 	Training Loss: 0.823590 	Validation Loss: 1.398475
Epoch: 60 	Training Loss: 0.803714 	Validation Loss: 1.700500
Epoch: 61 	Training Loss: 0.794623 	Validation Loss: 1.561655
Epoch: 62 	Training Loss: 0.771908 	Validation Loss: 1.321783
			Valid loss decreasing: 1.392409 --> 1.321783 ... Saving model
Epoch: 63 	Training Loss: 0.735909 	Validation Loss: 1.480193
Epoch: 64 	Training Loss: 0.739664 	Validation Loss: 1.870283
Epoch: 65 	Training Loss: 0.722902 	Validation Loss: 1.629323
Epoch: 66 	Training Loss: 0.703941 	Validation Loss: 1.476112
Epoch: 67 	Training Loss: 0.678257 	Validation Loss: 1.335627
Epoch: 68 	Training Loss: 0.659353 	Validation Loss: 1.402578
Epoch: 69 	Training Loss: 0.668533 	Validation Loss: 1.283935
			Valid loss decreasing: 1.321783 --> 1.283935 ... Saving model
Epoch: 70 	Training Loss: 0.632597 	Validation Loss: 1.327464
Epoch: 71 	Training Loss: 0.633749 	Validation Loss: 1.377922
Epoch: 72 	Training Loss: 0.601250 	Validation Loss: 1.591298
Epoch: 73 	Training Loss: 0.592232 	Validation Loss: 1.560418
Epoch: 74 	Training Loss: 0.585394 	Validation Loss: 1.832903
Epoch: 75 	Training Loss: 0.565121 	Validation Loss: 1.417004
Epoch: 76 	Training Loss: 0.575866 	Validation Loss: 1.502125
Epoch: 77 	Training Loss: 0.557400 	Validation Loss: 1.299996
Epoch: 78 	Training Loss: 0.531179 	Validation Loss: 1.400758
Epoch: 79 	Training Loss: 0.515439 	Validation Loss: 1.551358
Epoch: 80 	Training Loss: 0.526139 	Validation Loss: 1.266868
			Valid loss decreasing: 1.283935 --> 1.266868 ... Saving model
Epoch: 81 	Training Loss: 0.505314 	Validation Loss: 1.360504
Epoch: 82 	Training Loss: 0.498189 	Validation Loss: 1.133483
			Valid loss decreasing: 1.266868 --> 1.133483 ... Saving model
Epoch: 83 	Training Loss: 0.486168 	Validation Loss: 1.622027
Epoch: 84 	Training Loss: 0.488774 	Validation Loss: 1.253820
Epoch: 85 	Training Loss: 0.459105 	Validation Loss: 1.268256
Epoch: 86 	Training Loss: 0.457882 	Validation Loss: 1.274714
Epoch: 87 	Training Loss: 0.441477 	Validation Loss: 1.267358
Epoch: 88 	Training Loss: 0.434724 	Validation Loss: 1.206721
Epoch: 89 	Training Loss: 0.428184 	Validation Loss: 1.189280
Epoch: 90 	Training Loss: 0.426252 	Validation Loss: 1.140527
Epoch: 91 	Training Loss: 0.406832 	Validation Loss: 1.289691
Epoch: 92 	Training Loss: 0.387979 	Validation Loss: 1.139867
Epoch: 93 	Training Loss: 0.382955 	Validation Loss: 1.301558
Epoch: 94 	Training Loss: 0.393753 	Validation Loss: 1.212189
Epoch: 95 	Training Loss: 0.388890 	Validation Loss: 1.283480
Epoch: 96 	Training Loss: 0.379270 	Validation Loss: 1.156617
Epoch: 97 	Training Loss: 0.373077 	Validation Loss: 1.082260
			Valid loss decreasing: 1.133483 --> 1.082260 ... Saving model
Epoch: 98 	Training Loss: 0.360401 	Validation Loss: 1.276866
Epoch: 99 	Training Loss: 0.348237 	Validation Loss: 1.214333
Epoch: 100 	Training Loss: 0.332743 	Validation Loss: 1.469722
Epoch: 101 	Training Loss: 0.337287 	Validation Loss: 1.149485
Epoch: 102 	Training Loss: 0.339379 	Validation Loss: 1.234174
Epoch: 103 	Training Loss: 0.324193 	Validation Loss: 1.194632
Epoch: 104 	Training Loss: 0.342442 	Validation Loss: 1.492833
Epoch: 105 	Training Loss: 0.329445 	Validation Loss: 1.184816
Epoch: 106 	Training Loss: 0.325017 	Validation Loss: 1.621452
Epoch: 107 	Training Loss: 0.311605 	Validation Loss: 1.121114
Epoch: 108 	Training Loss: 0.303231 	Validation Loss: 1.203766
Epoch: 109 	Training Loss: 0.304002 	Validation Loss: 1.332817
Epoch: 110 	Training Loss: 0.293994 	Validation Loss: 1.194260
Epoch: 111 	Training Loss: 0.301909 	Validation Loss: 1.482629
Epoch: 112 	Training Loss: 0.290733 	Validation Loss: 1.201787
Epoch   111: reducing learning rate of group 0 to 1.0000e-03.
Epoch: 113 	Training Loss: 0.243125 	Validation Loss: 0.950062
			Valid loss decreasing: 1.082260 --> 0.950062 ... Saving model
Epoch: 114 	Training Loss: 0.214982 	Validation Loss: 0.940977
			Valid loss decreasing: 0.950062 --> 0.940977 ... Saving model
Epoch: 115 	Training Loss: 0.196663 	Validation Loss: 0.950268
Epoch: 116 	Training Loss: 0.195864 	Validation Loss: 0.938547
			Valid loss decreasing: 0.940977 --> 0.938547 ... Saving model
Epoch: 117 	Training Loss: 0.193325 	Validation Loss: 0.943364
Epoch: 118 	Training Loss: 0.197165 	Validation Loss: 0.948568
Epoch: 119 	Training Loss: 0.188965 	Validation Loss: 0.948170
Epoch: 120 	Training Loss: 0.183989 	Validation Loss: 0.928225
			Valid loss decreasing: 0.938547 --> 0.928225 ... Saving model
Epoch: 121 	Training Loss: 0.192438 	Validation Loss: 0.937986
Epoch: 122 	Training Loss: 0.181616 	Validation Loss: 0.935121
Epoch: 123 	Training Loss: 0.187639 	Validation Loss: 0.944064
Epoch: 124 	Training Loss: 0.178522 	Validation Loss: 0.940306
Epoch: 125 	Training Loss: 0.183996 	Validation Loss: 0.941136
In [21]:
# load the model that got the best validation accuracy
checkpoint_path = gdrive_dir+'model_scratch.pt'
checkpoint = torch.load(checkpoint_path)
model_scratch.load_state_dict(checkpoint['model_state_dict'])
optimizer_scratch.load_state_dict(checkpoint['optimizer_state_dict'])

print('Loading trained model ... Epochs: {}\tTraining Loss: {}\t Valid Loss: {}'.format(
        checkpoint['epochs'], checkpoint['train_loss_trace'][-1],
        checkpoint['valid_loss_trace'][-1] ))
Loading trained model ... Epochs: 120	Training Loss: 0.1839889748820237	 Valid Loss: 0.9282254934310913
In [22]:
# Plot the training and validation losses
def plot_training_results(checkpoint):
    fig, ax = plt.subplots(figsize=(30,8))

    ax.set_facecolor('w')
    ax.grid(True, color='lightgray')

    ax.plot(checkpoint['train_loss_trace'], label='Training Loss')
    ax.plot(checkpoint['valid_loss_trace'], label='Validation Loss')

    ax.set_xlabel('Epochs')
    ax.set_ylabel('Loss')

    ax.legend(frameon=True, facecolor='w')

    plt.show()
In [23]:
plot_training_results(checkpoint)

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.

In [24]:
def test(loaders, model, criterion, use_cuda):

    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.
    
    class_accuracy = torch.zeros(133)
    class_size = torch.zeros(133)
    
    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test_data']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss 
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        
        for label, prediction in zip(target.data, pred.squeeze()):
            label, prediction = label.item(), prediction.item()
            class_accuracy[label] += 1 if label == prediction else 0
            class_size[label] += 1
        
    correct, total = int(sum(class_accuracy)), int(sum(class_size))
    
    print('Test Loss: {:.6f}\n'.format(test_loss))
    
    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))
    
    class_accuracy = class_accuracy/class_size
    class_accuracy = class_accuracy.view(7,19)
    
    fig, ax = plt.subplots(figsize=(30,8))
    ax.set_axis_off()
    ax.grid(False)
    
    im = ax.imshow(class_accuracy, cmap='RdYlGn')
    
    cbar = ax.figure.colorbar(im, ax=ax)
    cbar.ax.set_ylabel('accuracy', rotation=-90, va="bottom")

    for i in range(class_accuracy.shape[0]):
        for j in range(class_accuracy.shape[1]):
            text_color = 'w' if class_accuracy[i, j] > 0.6 else 'k'
            ax.text(j, i,
                    f'class: {j+i*19}\n{class_accuracy[i, j]*100:.0f}%',
                    ha="center", va="center", color=text_color)
    
In [25]:
# call test function
test(dataloaders, model_scratch, criterion_scratch, use_cuda)
Test Loss: 1.014253


Test Accuracy: 74% (623/836)

Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)

You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).

If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.

In [26]:
## DONE: Specify data loaders
# I have decided to use the same data loaders as in the previous step.
# This way, I can compare the results of my from-scratch network
# with the results from transfer learning.

(IMPLEMENTATION) Model Architecture

Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.

In [27]:
## DONE: Specify model architecture 
model_transfer = models.resnet152(pretrained=True)

# do not track gradients for the model parameters
for param in model_transfer.parameters():
    param.requires_grad = False
    
fc_in_features = model_transfer.fc.in_features
fc_out_classes = 133

model_transfer.fc = nn.Linear(fc_in_features, fc_out_classes, bias=True)

if use_cuda:
    model_transfer = model_transfer.cuda()
Downloading: "https://download.pytorch.org/models/resnet152-b121ed2d.pth" to /root/.torch/models/resnet152-b121ed2d.pth
241530880it [00:02, 88678047.56it/s]

Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

Answer:

Since I built my from-scratch CNN inspired by the ResNet paper, I decided to use a pre-trained ResNet for my transfer learning model. This way, I can compare the results of my from-scratch model with those of the transfer learning model.

Based on what I learned in Lesson 3, I decided to change only the last layer (the fully connected layer). The pre-trained model has its weights trained on ImageNet, a dataset containing over 10 million images, while my training set contains 6,680 images. Therefore, I did not have enough samples to fine-tune the weights of the pre-trained network. Also, since ImageNet covers dog identification, the pre-trained model already has all the information needed to extract the higher-level features and classify them into dog breeds.

Thus, I froze all the pre-trained weights, since I would not be training them. I simply replaced the classifier with a new fully connected layer, which receives the convolutional network output and generates the dog breed predictions. I then trained the network to update the weights of the new classifier only.

At first, I chose ResNet-18 because, among the ResNet models, its architecture is the most similar to my from-scratch model. After 30 epochs, my transfer learning model with ResNet-18 reached 83% accuracy (700 of 836 test images).

Since ResNet-18 was trained on a huge dataset, I expected it to outperform my from-scratch model. Furthermore, its architecture has been tested and improved over a long time. Despite that, I am glad to see my from-scratch model performing not much worse than a state-of-the-art model.

Finally, I switched the pre-trained network to ResNet-152, because I wanted to compare the previous results with the heaviest ResNet available among the TorchVision models. As expected, ResNet-152 performed better than ResNet-18, but, surprisingly, not extraordinarily so. After 30 epochs, my transfer learning model with ResNet-152 reached 89% accuracy (746 of 836 test images). The same model, after 100 epochs, reached a slightly better result with the same 89% accuracy, correctly predicting only 4 more images (750 of 836 test images).

Of course, many other components could still be fine-tuned, such as the optimizer, the learning rate, and the training data augmentation. Fine-tuning the pre-trained weights themselves might also unlock more of ResNet-152's potential; a sketch of that idea follows.
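
A sketch of that idea, not run here: unfreeze the last residual stage (layer4 in TorchVision's ResNet) and train it with a smaller learning rate than the new classifier. The optimizer name below is illustrative:

# unfreeze only the last residual stage for fine-tuning
for param in model_transfer.layer4.parameters():
    param.requires_grad = True

# per-group learning rates: small for pre-trained weights, larger for the new fc
optimizer_finetune = optim.Adagrad([
    {'params': model_transfer.layer4.parameters(), 'lr': 0.001},
    {'params': model_transfer.fc.parameters(), 'lr': 0.01},
], weight_decay=0.001)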

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.

In [28]:
### DONE: select loss function
criterion_transfer = nn.CrossEntropyLoss()

### DONE: select optimizer
optimizer_transfer = optim.Adagrad(model_transfer.fc.parameters(), lr=0.01, weight_decay=0.001)

scheduler_transfer = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer_transfer, mode='min', factor=0.1, patience=9, verbose=True,
    threshold=0.01, threshold_mode='abs', cooldown=0, min_lr=0, eps=1e-08)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.

In [29]:
# Define the checkpoint path
checkpoint_path = gdrive_dir+'model_transfer.pt'
In [30]:
# train the model
model_transfer = train(100, dataloaders, model_transfer, optimizer_transfer,
                       criterion_transfer, use_cuda, checkpoint_path, scheduler_transfer)
Epoch: 1 	Training Loss: 3.246167 	Validation Loss: 1.087314
			Valid loss decreasing: inf --> 1.087314 ... Saving model
Epoch: 2 	Training Loss: 0.837386 	Validation Loss: 0.675057
			Valid loss decreasing: 1.087314 --> 0.675057 ... Saving model
Epoch: 3 	Training Loss: 0.620059 	Validation Loss: 0.557914
			Valid loss decreasing: 0.675057 --> 0.557914 ... Saving model
Epoch: 4 	Training Loss: 0.535117 	Validation Loss: 0.502107
			Valid loss decreasing: 0.557914 --> 0.502107 ... Saving model
Epoch: 5 	Training Loss: 0.484228 	Validation Loss: 0.475766
			Valid loss decreasing: 0.502107 --> 0.475766 ... Saving model
Epoch: 6 	Training Loss: 0.436408 	Validation Loss: 0.433033
			Valid loss decreasing: 0.475766 --> 0.433033 ... Saving model
Epoch: 7 	Training Loss: 0.404096 	Validation Loss: 0.418810
			Valid loss decreasing: 0.433033 --> 0.418810 ... Saving model
Epoch: 8 	Training Loss: 0.376884 	Validation Loss: 0.419488
Epoch: 9 	Training Loss: 0.359159 	Validation Loss: 0.411565
			Valid loss decreasing: 0.418810 --> 0.411565 ... Saving model
Epoch: 10 	Training Loss: 0.350716 	Validation Loss: 0.401688
			Valid loss decreasing: 0.411565 --> 0.401688 ... Saving model
Epoch: 11 	Training Loss: 0.333217 	Validation Loss: 0.398045
			Valid loss decreasing: 0.401688 --> 0.398045 ... Saving model
Epoch: 12 	Training Loss: 0.319005 	Validation Loss: 0.374675
			Valid loss decreasing: 0.398045 --> 0.374675 ... Saving model
Epoch: 13 	Training Loss: 0.304804 	Validation Loss: 0.378085
Epoch: 14 	Training Loss: 0.302336 	Validation Loss: 0.362434
			Valid loss decreasing: 0.374675 --> 0.362434 ... Saving model
Epoch: 15 	Training Loss: 0.290125 	Validation Loss: 0.361295
			Valid loss decreasing: 0.362434 --> 0.361295 ... Saving model
Epoch: 16 	Training Loss: 0.285517 	Validation Loss: 0.365483
Epoch: 17 	Training Loss: 0.276770 	Validation Loss: 0.357860
			Valid loss decreasing: 0.361295 --> 0.357860 ... Saving model
Epoch: 18 	Training Loss: 0.274349 	Validation Loss: 0.346730
			Valid loss decreasing: 0.357860 --> 0.346730 ... Saving model
Epoch: 19 	Training Loss: 0.269835 	Validation Loss: 0.351469
Epoch: 20 	Training Loss: 0.259987 	Validation Loss: 0.342063
			Valid loss decreasing: 0.346730 --> 0.342063 ... Saving model
Epoch: 21 	Training Loss: 0.263216 	Validation Loss: 0.351233
Epoch: 22 	Training Loss: 0.252027 	Validation Loss: 0.350294
Epoch: 23 	Training Loss: 0.248045 	Validation Loss: 0.352901
Epoch: 24 	Training Loss: 0.243323 	Validation Loss: 0.352130
Epoch: 25 	Training Loss: 0.235602 	Validation Loss: 0.340536
			Valid loss decreasing: 0.342063 --> 0.340536 ... Saving model
Epoch: 26 	Training Loss: 0.236903 	Validation Loss: 0.340874
Epoch: 27 	Training Loss: 0.234823 	Validation Loss: 0.347353
Epoch: 28 	Training Loss: 0.230236 	Validation Loss: 0.339018
Epoch    27: reducing learning rate of group 0 to 1.0000e-03.
			Valid loss decreasing: 0.340536 --> 0.339018 ... Saving model
Epoch: 29 	Training Loss: 0.222378 	Validation Loss: 0.336041
			Valid loss decreasing: 0.339018 --> 0.336041 ... Saving model
Epoch: 30 	Training Loss: 0.218952 	Validation Loss: 0.331957
			Valid loss decreasing: 0.336041 --> 0.331957 ... Saving model
Epoch: 31 	Training Loss: 0.212902 	Validation Loss: 0.333057
Epoch: 32 	Training Loss: 0.217805 	Validation Loss: 0.334787
Epoch: 33 	Training Loss: 0.217960 	Validation Loss: 0.331337
			Valid loss decreasing: 0.331957 --> 0.331337 ... Saving model
Epoch: 34 	Training Loss: 0.214278 	Validation Loss: 0.332915
Epoch: 35 	Training Loss: 0.216231 	Validation Loss: 0.332540
Epoch: 36 	Training Loss: 0.216266 	Validation Loss: 0.332636
Epoch: 37 	Training Loss: 0.210613 	Validation Loss: 0.331945
Epoch: 38 	Training Loss: 0.213812 	Validation Loss: 0.332559
Epoch: 39 	Training Loss: 0.224270 	Validation Loss: 0.330716
Epoch    38: reducing learning rate of group 0 to 1.0000e-04.
			Valid loss decreasing: 0.331337 --> 0.330716 ... Saving model
Epoch: 40 	Training Loss: 0.214896 	Validation Loss: 0.329996
			Valid loss decreasing: 0.330716 --> 0.329996 ... Saving model
Epoch: 41 	Training Loss: 0.215009 	Validation Loss: 0.331683
Epoch: 42 	Training Loss: 0.215666 	Validation Loss: 0.329143
			Valid loss decreasing: 0.329996 --> 0.329143 ... Saving model
Epoch: 43 	Training Loss: 0.205044 	Validation Loss: 0.331317
Epoch: 44 	Training Loss: 0.211191 	Validation Loss: 0.330404
Epoch: 45 	Training Loss: 0.212015 	Validation Loss: 0.331724
Epoch: 46 	Training Loss: 0.211268 	Validation Loss: 0.329537
Epoch: 47 	Training Loss: 0.203100 	Validation Loss: 0.331433
Epoch: 48 	Training Loss: 0.214244 	Validation Loss: 0.331949
Epoch: 49 	Training Loss: 0.201871 	Validation Loss: 0.330599
Epoch    48: reducing learning rate of group 0 to 1.0000e-05.
Epoch: 50 	Training Loss: 0.209931 	Validation Loss: 0.331327
Epoch: 51 	Training Loss: 0.210383 	Validation Loss: 0.329271
Epoch: 52 	Training Loss: 0.210749 	Validation Loss: 0.330848
Epoch: 53 	Training Loss: 0.218279 	Validation Loss: 0.331730
Epoch: 54 	Training Loss: 0.209296 	Validation Loss: 0.331865
Epoch: 55 	Training Loss: 0.206914 	Validation Loss: 0.332496
Epoch: 56 	Training Loss: 0.209287 	Validation Loss: 0.330717
Epoch: 57 	Training Loss: 0.206190 	Validation Loss: 0.330425
Epoch: 58 	Training Loss: 0.213110 	Validation Loss: 0.331438
Epoch: 59 	Training Loss: 0.209054 	Validation Loss: 0.329715
Epoch    58: reducing learning rate of group 0 to 1.0000e-06.
Epoch: 60 	Training Loss: 0.209525 	Validation Loss: 0.331362
Epoch: 61 	Training Loss: 0.211733 	Validation Loss: 0.331498
Epoch: 62 	Training Loss: 0.213110 	Validation Loss: 0.330280
Epoch: 63 	Training Loss: 0.210623 	Validation Loss: 0.331959
Epoch: 64 	Training Loss: 0.217347 	Validation Loss: 0.332791
Epoch: 65 	Training Loss: 0.215871 	Validation Loss: 0.331017
Epoch: 66 	Training Loss: 0.216783 	Validation Loss: 0.331952
Epoch: 67 	Training Loss: 0.208052 	Validation Loss: 0.332858
Epoch: 68 	Training Loss: 0.211692 	Validation Loss: 0.330372
Epoch: 69 	Training Loss: 0.215362 	Validation Loss: 0.329389
Epoch    68: reducing learning rate of group 0 to 1.0000e-07.
Epoch: 70 	Training Loss: 0.211281 	Validation Loss: 0.331099
Epoch: 71 	Training Loss: 0.209235 	Validation Loss: 0.330792
Epoch: 72 	Training Loss: 0.216351 	Validation Loss: 0.331115
Epoch: 73 	Training Loss: 0.215278 	Validation Loss: 0.329099
			Valid loss decreasing: 0.329143 --> 0.329099 ... Saving model
Epoch: 74 	Training Loss: 0.210130 	Validation Loss: 0.330573
Epoch: 75 	Training Loss: 0.213615 	Validation Loss: 0.329275
Epoch: 76 	Training Loss: 0.211166 	Validation Loss: 0.333320
Epoch: 77 	Training Loss: 0.208299 	Validation Loss: 0.330086
Epoch: 78 	Training Loss: 0.217767 	Validation Loss: 0.335429
Epoch: 79 	Training Loss: 0.212164 	Validation Loss: 0.333401
Epoch    78: reducing learning rate of group 0 to 1.0000e-08.
Epoch: 80 	Training Loss: 0.210128 	Validation Loss: 0.330320
Epoch: 81 	Training Loss: 0.215239 	Validation Loss: 0.331374
Epoch: 82 	Training Loss: 0.212292 	Validation Loss: 0.330370
Epoch: 83 	Training Loss: 0.213040 	Validation Loss: 0.333726
Epoch: 84 	Training Loss: 0.210055 	Validation Loss: 0.334057
Epoch: 85 	Training Loss: 0.210928 	Validation Loss: 0.328295
			Valid loss decreasing: 0.329099 --> 0.328295 ... Saving model
Epoch: 86 	Training Loss: 0.204096 	Validation Loss: 0.327461
			Valid loss decreasing: 0.328295 --> 0.327461 ... Saving model
Epoch: 87 	Training Loss: 0.214287 	Validation Loss: 0.330152
Epoch: 88 	Training Loss: 0.211969 	Validation Loss: 0.331960
Epoch: 89 	Training Loss: 0.218578 	Validation Loss: 0.331503
Epoch: 90 	Training Loss: 0.211960 	Validation Loss: 0.328159
Epoch: 91 	Training Loss: 0.215107 	Validation Loss: 0.330688
Epoch: 92 	Training Loss: 0.213615 	Validation Loss: 0.331101
Epoch: 93 	Training Loss: 0.210853 	Validation Loss: 0.330915
Epoch: 94 	Training Loss: 0.211546 	Validation Loss: 0.330606
Epoch: 95 	Training Loss: 0.211472 	Validation Loss: 0.328796
Epoch: 96 	Training Loss: 0.206619 	Validation Loss: 0.331702
Epoch: 97 	Training Loss: 0.211765 	Validation Loss: 0.330285
Epoch: 98 	Training Loss: 0.202847 	Validation Loss: 0.333180
Epoch: 99 	Training Loss: 0.212251 	Validation Loss: 0.332009
Epoch: 100 	Training Loss: 0.212281 	Validation Loss: 0.329745
In [31]:
# load the model that got the best (lowest) validation loss

checkpoint = torch.load(checkpoint_path)

model_transfer.load_state_dict(checkpoint['model_state_dict'])
optimizer_transfer.load_state_dict(checkpoint['optimizer_state_dict'])

print('Loading trained model ... Epochs: {}\tTraining Loss: {}\t Valid Loss: {}'.format(
        checkpoint['epochs'], checkpoint['train_loss_trace'][-1],
        checkpoint['valid_loss_trace'][-1] ))
Loading trained model ... Epochs: 86	Training Loss: 0.20409625183258737	 Valid Loss: 0.3274605095386505
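
One caveat worth noting: since the checkpoint was presumably saved from a GPU run, loading it later on a CPU-only machine would require remapping the tensors, e.g.:

checkpoint = torch.load(checkpoint_path, map_location='cpu')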
In [32]:
plot_training_results(checkpoint)

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
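
The test helper is defined earlier in the notebook; a minimal sketch of what it is assumed to compute, matching the loss/accuracy output below (the 'test_data' key mirrors the 'train_data' key used later and is an assumption):

import torch

def test(loaders, model, criterion, use_cuda):
    # Hedged sketch: accumulate the average loss and top-1 accuracy
    # over the test split.
    test_loss, correct, total = 0.0, 0, 0
    model.eval()
    with torch.no_grad():
        for batch_idx, (data, target) in enumerate(loaders['test_data']):
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            output = model(data)
            loss = criterion(output, target)
            # Running average of the batch losses.
            test_loss += (loss.item() - test_loss) / (batch_idx + 1)
            pred = output.argmax(dim=1)
            correct += (pred == target).sum().item()
            total += target.size(0)
    print(f'Test Loss: {test_loss:.6f}\n')
    print(f'Test Accuracy: {100 * correct // total}% ({correct}/{total})')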

In [33]:
test(dataloaders, model_transfer, criterion_transfer, use_cuda)
Test Loss: 0.361601


Test Accuracy: 89% (750/836)

(IMPLEMENTATION) Predict Dog Breed with the Model

Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.

In [34]:
# list of class names by index, i.e. a name can be accessed like class_names[0];
# the [4:] slice strips the "NNN." numeric prefix from each class folder name
class_sample_folders = ['dogImages/test/'+item for item in dataloaders['train_data'].dataset.classes]
class_names = [item[4:].replace("_", " ") for item in dataloaders['train_data'].dataset.classes]
In [35]:
### DONE: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.

def predict_breed_transfer(img_path):
    # load the image and apply the validation/test transforms
    img = Image.open(img_path)
    img = data_transforms['valid_test'](img)
    img = img[None, :]  # add the batch dimension
    
    if next(model_transfer.parameters()).is_cuda:
        img = img.cuda()
    
    # run inference in eval mode and without tracking gradients
    model_transfer.eval()
    with torch.no_grad():
        prediction = model_transfer(img)
        prediction = nn.Softmax(dim=1)(prediction)
    
    # keep the five most likely breeds
    top_prob, top_class = prediction.topk(5, dim=1)
    top_prob, top_class = top_prob.squeeze(), top_class.squeeze()
    
    breeds = [(class_names[c.item()], p.item(), c.item())
              for p, c in zip(top_prob, top_class)]
    
    return breeds
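
The function returns the top five predictions as (breed name, probability, class index) tuples, so the most likely breed is simply the first element; for a hypothetical image path:

# Hypothetical usage: the first tuple holds the most likely breed.
breed, prob, class_id = predict_breed_transfer('some_dog_photo.jpg')[0]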
In [36]:
rows = 6
cols = 4

dog_samples_idx = torch.randint(low=0, high=len(dog_files), size=(rows*cols,))

paths = [dog_files[idx] for idx in dog_samples_idx]
labels = [path.split('/')[2][4:].replace("_", " ") for path in paths]
predictions = [predict_breed_transfer(path) for path in paths]

fig, axs = plt.subplots(nrows=rows*2, ncols=cols, constrained_layout=True, figsize=(15, 28))
axs = axs.flatten()

row_ctrl = -1
rows *= 2

for pos, (path, label, prediction) in enumerate(zip(paths, labels, predictions)):
    if pos % cols == 0:
        row_ctrl += 1
        
    img = Image.open(path)
    
    title = f'{label}'    
    title_color = 'green' if label == prediction[0][0] else 'red'

    position = pos + row_ctrl*cols
    ax1 = axs[position]
    ax1.imshow(img)
    ax1.set_title(title, color=title_color)
    ax1.grid(False)
    ax1.set_yticks([])
    ax1.set_xticks([])


    breed_label = []
    breed_prob = []
    for breed in prediction: 
        breed_label.append(breed[0])
        breed_prob.append(breed[1])

    ax2 = axs[position+cols]
    ax2.barh(np.arange(5), breed_prob)
    ax2.set_yticks(np.arange(5))
    ax2.set_yticklabels(breed_label)
    ax2.set_ylim(-1, 5)
    ax2.invert_yaxis()
    ax2.set_xlim(0, 1.1)
    
    
plt.show()

Step 5: Write your Algorithm

Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

  • if a dog is detected in the image, return the predicted breed.
  • if a human is detected in the image, return the resembling dog breed.
  • if neither is detected in the image, provide output that indicates an error.

You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.

Some sample output for our algorithm is provided below, but feel free to design your own user experience!

Sample Human Output

(IMPLEMENTATION) Write your Algorithm

In [37]:
### DONE: Write your algorithm.
### Feel free to use as many code cells as needed.

# Set the threshold to consider as being purebred
purebred_threshold = 0.90

def run_app(img_path):
    ## handle cases for a human face, dog, and neither

    fig, axs = plt.subplots(nrows=1, ncols=2, constrained_layout=True,
                            figsize=(10, 4), linewidth=4.0, edgecolor='k')
    axs = axs.flatten()
    
    # show the input image
    img_height = 224
    
    img = Image.open(img_path)
    w, h = img.size
    img = img.resize((int(w/h*img_height), img_height))
    
    axs[0].imshow(img)
    axs[0].set_facecolor('w')
    axs[0].set_axis_off()
    axs[0].grid(False)    
    
    
    # describe the input image
    title = None
    title_color = 'blue'
    
    if dog_detector(img_path):
        title = 'Such a cute dog!'

        breeds = predict_breed_transfer(img_path)
        
        if breeds[0][1] >= purebred_threshold:
            title += '\nI see you as a purebred!'
            axs[1].set_title(breeds[0][0].upper(), color=title_color,
                             fontsize='xx-large', fontweight='bold')
            
            purebred_img = Image.open('purebred.png')
            
            axs[1].imshow(purebred_img)
            axs[1].set_facecolor('w')
            axs[1].set_axis_off()
            axs[1].grid(False)    
        else:
            axs[1].set_title('I see you as a crossbreed!',
                             color=title_color, fontsize='xx-large')
            
            breeds = list(zip(*breeds))
            
            axs[1].barh(np.arange(5), breeds[1])
            axs[1].set_yticks(np.arange(5))
            axs[1].set_yticklabels(breeds[0])
            axs[1].set_ylim(-0.5, 4.5)
            axs[1].invert_yaxis()
            axs[1].set_xlim(0, 1.05)
            
    elif face_detector(img_path) > 0:
        breeds = predict_breed_transfer(img_path)
        
        title = 'Hey human! To me, you look like a'
        axs[1].set_title(breeds[0][0].upper(),
                         color=title_color, fontsize='xx-large')
        
        class_id = breeds[0][2]
        folder = class_sample_folders[class_id]
        
        files_list = os.listdir(folder)
        img_idx = np.random.randint(len(files_list))
        
        sample_image_path = folder+'/'+files_list[img_idx]
        
        sample_image = Image.open(sample_image_path)
        
        w, h = sample_image.size
        sample_image = sample_image.resize((int(w/h*img_height), img_height))
        
        axs[1].imshow(sample_image)
        axs[1].set_axis_off()
        axs[1].grid(False)    
    else:
        title = 'Sorry, I could detect neither dogs nor humans in this image.'
        title_color = 'red'
        
        no_dogs_img = Image.open('no_dogs_detected.png')
        
        axs[1].imshow(no_dogs_img)
        axs[1].set_facecolor('w')
        axs[1].set_axis_off()
        axs[1].grid(False)    
        
    axs[0].set_title(title, color=title_color, fontsize='xx-large')
    plt.show()
In [38]:
# Predictions for dogs
dog_samples_idx = torch.randint(low=0, high=len(dog_files), size=(10,))

for idx in dog_samples_idx:
    run_app(dog_files[idx])
    print()
In [39]:
# Predictions for humans
human_samples_idx = torch.randint(low=0, high=len(human_files), size=(10,))

for idx in human_samples_idx:
    run_app(human_files[idx])
    print()
Step 6: Test Your Algorithm

In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

(IMPLEMENTATION) Test Your Algorithm on Sample Images!

Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.

Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

Answer: (Three possible points for improvement)

I had a good time developing this project.

I worked hard to develop my CNN from scratch, but it was worth every minute I spent on it! It is very gratifying to see my model performing so well.
Still, as expected, I reached a better result by using transfer learning.

Seeing the resembling dog breed for a human image side by side with a real dog image made my day!

Despite that, my final model did not perform as well as I expected on my own photos.
It correctly detected whether or not my photos contained dogs, but it failed to predict the breeds of the dogs in them. That upset me a little. I don't know what went wrong: perhaps the color saturation of the photos, the position of the dogs, or something else. I should run more tests with more images to identify potential causes of my final model's mispredictions.

Possible points of improvement for my algorithm:

  • Make better predictions on real photos of dogs
  • Enhance the training and testing phases by plotting graphs of how the model is performing
  • Detect both humans and dogs when the image contains both
  • If the image includes more than one dog, predict the breed of each one
  • If the picture includes more than one human face, predict the resembling dog breed for each one (see the sketch after this list)
  • If the human in the photo resembles a crossbreed dog, present that information as well
  • Allow the application to capture images with a cell phone camera (or a PC webcam), in addition to uploading photos
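
As an illustration of the multiple-faces idea above, a hedged sketch (not part of my submission): crop each face found by the OpenCV Haar cascade from Step 1 (assumed to be available as face_cascade) and run the existing breed predictor on each crop; the crop margin and the temporary file name are arbitrary illustrations.

import cv2
from PIL import Image

def predict_breed_per_face(img_path):
    # Detect every face, crop it with a small margin, and classify each
    # crop separately with the existing breed predictor.
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    results = []
    for (x, y, w, h) in faces:
        m = int(0.3 * w)  # arbitrary margin so the crop keeps some context
        crop = img[max(0, y - m):y + h + m, max(0, x - m):x + w + m]
        # predict_breed_transfer takes a path, so round-trip through a file.
        crop_pil = Image.fromarray(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB))
        crop_path = 'face_crop.jpg'  # hypothetical temporary file
        crop_pil.save(crop_path)
        results.append(predict_breed_transfer(crop_path))
    return results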
In [40]:
## DONE: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.

run_app('johnny.jpg')
print()
run_app('blue.jpg')
print()
run_app('silvio.jpg')
print()
run_app('luciana.jpg')
print()
run_app('cidinha.jpg')
print()
run_app('silviao.jpg')
print()
run_app('graziela.jpg')
print()
run_app('marcelo.jpg')
print()
run_app('vicky.jpg')
print()
run_app('taco.jpg')
print()
run_app('sealion.jpg')
print()
run_app('madonna_1.jpg')
print()
run_app('madonna_2.jpg')
print()
run_app('Fred_the_Bassador.jpg')
print()
run_app('Pomsky_Dog_Breed_-_Pomeranian_Husky_Mix.jpg')
print()
run_app('Huskamute_facial_expression.jpg')
print()
run_app('cat_adriana.jpg')
print()